
fix: prevent service degradation on config reload failure#589

Open
qq173924384 wants to merge 1 commit into ContentSquare:master from qq173924384:fix/config-reload-atomicity

Conversation

@qq173924384 qq173924384 commented Mar 26, 2026

Summary

  • Atomic config reload: When a new ConnectionPool config is detected, the new proxy is fully initialized and validated before replacing the old one, preventing partial/broken state exposure.
  • Graceful old proxy shutdown: After the new proxy takes over, the old proxy's reloadSignal is closed and its goroutines are waited on via reloadWG, avoiding goroutine leaks.
  • Fix maxIdleConns field assignment: In newReverseProxy, maxIdleConns was incorrectly set to cfgCp.MaxIdleConnsPerHost; corrected to cfgCp.MaxIdleConns.

Test plan

  • Verify that a config reload with a changed ConnectionPool correctly creates and applies the new proxy before discarding the old one
  • Verify that a failed config reload does not leave the service in a degraded state (old proxy remains active)
  • Verify that the old proxy goroutines are properly cleaned up after reload
  • Verify that maxIdleConns is correctly set from cfg.ConnectionPool.MaxIdleConns

🤖 Generated with Claude Code

When applyConfig() failed during hot reload, the old proxy was destroyed
before the new proxy was successfully initialized, causing a partial service outage.

Changes:
- main.go: Use a temporary variable to create and validate the new proxy before
  replacing the global instance. On failure, clean up the new proxy and keep
  the old proxy running.
- proxy.go: Fix bug where maxIdleConns was incorrectly assigned MaxIdleConnsPerHost
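
The swap-on-success pattern described above can be sketched as follows. This is a minimal illustration, not the actual chproxy code: `applyConfig`, `reloadSignal`, and `reloadWG` are named after the PR description, while the `reverseProxy` struct, the `valid` flag standing in for config validation, and the `close` helper are hypothetical stand-ins.

```go
// Sketch of an atomic hot-reload: build and validate the new proxy first,
// swap it in only on success, then shut the old one down without leaking
// goroutines. All structure here is illustrative.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type reverseProxy struct {
	reloadSignal chan struct{} // closed to tell background goroutines to stop
	reloadWG     sync.WaitGroup
}

func newReverseProxy() *reverseProxy {
	p := &reverseProxy{reloadSignal: make(chan struct{})}
	p.reloadWG.Add(1)
	go func() {
		defer p.reloadWG.Done()
		<-p.reloadSignal // background worker exits when the signal is closed
	}()
	return p
}

// close stops the proxy's goroutines and waits for them to finish,
// so a replaced proxy does not leak goroutines.
func (p *reverseProxy) close() {
	close(p.reloadSignal)
	p.reloadWG.Wait()
}

var proxy *reverseProxy // global instance serving traffic

// applyConfig fully initializes and validates a new proxy before touching the
// global instance. On failure it cleans up the half-built proxy and leaves the
// old one running; on success it swaps and then retires the old proxy.
func applyConfig(valid bool) error {
	newProxy := newReverseProxy()
	if !valid { // stand-in for real config validation failing
		newProxy.close()
		return errors.New("config validation failed")
	}
	oldProxy := proxy
	proxy = newProxy
	if oldProxy != nil {
		oldProxy.close()
	}
	return nil
}

func main() {
	proxy = newReverseProxy()
	before := proxy

	err := applyConfig(false) // failed reload: old proxy must survive
	fmt.Println("failed reload kept old proxy:", err != nil && proxy == before)

	err = applyConfig(true) // successful reload: proxy is replaced
	fmt.Println("successful reload swapped proxy:", err == nil && proxy != before)
}
```

The key ordering is that the global `proxy` pointer is only reassigned after the new instance is fully constructed, so a reload failure can never leave the service pointing at a partially initialized proxy.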
